
    Quantum Separability and Entanglement Detection via Entanglement-Witness Search and Global Optimization

    We focus on determining the separability of an unknown bipartite quantum state ρ by invoking a sufficiently large subset of all possible entanglement witnesses, given the expected value of each element of a set of mutually orthogonal observables. We review the concept of an entanglement witness from the geometrical point of view and use this geometry to show that the set of separable states is not a polytope, and to characterize the class of entanglement witnesses (observables) that detect entangled states on opposite sides of the set of separable states. All this serves to motivate a classical algorithm which, given the expected values of a subset of an orthogonal basis of observables of an otherwise unknown quantum state, searches for an entanglement witness in the span of the subset of observables. The idea of such an algorithm, which is an efficient reduction of the quantum separability problem to a global optimization problem, was introduced in PRA 70, 060303(R), where it was shown to be an improvement on the naive approach to the quantum separability problem (exhaustive search for a decomposition of the given state into a convex combination of separable states). The last section of the paper discusses such algorithms in more generality; in our case, they assume a subroutine that computes the global maximum of a real function of several variables. Despite this assumption, we anticipate that such algorithms will perform sufficiently well on small instances to render a feasible test for separability in some cases of interest (e.g., in 3-by-3 dimensional systems).
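
    As a rough illustration of the reduction described above, the following sketch searches for an entanglement witness in the span of a small set of measured observables for two qubits. The observable set, the singlet test state, and the use of multistart local optimization in place of a true global-optimization subroutine are all assumptions made for the example; this is not the algorithm of PRA 70, 060303(R) itself.

```python
# Hedged sketch: look for a witness W = sum_i c_i O_i, built only from measured
# observables, such that Tr(W rho) < 0 while <ab|W|ab> >= 0 for all product states.
# The inner "global" minimization is approximated by multistart local optimization.
import numpy as np
from scipy.optimize import minimize

# Observables: identity plus the Pauli correlators XX, YY, ZZ (mutually orthogonal).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
obs = [np.kron(I2, I2), np.kron(X, X), np.kron(Y, Y), np.kron(Z, Z)]

# "Measured" expectation values for the (entangled) singlet state.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
b = np.array([np.real(np.trace(rho @ O)) for O in obs])        # = (1, -1, -1, -1)

def product_state(x):
    """Pure product state parametrized by two pairs of Bloch angles."""
    t1, p1, t2, p2 = x
    a = np.array([np.cos(t1 / 2), np.exp(1j * p1) * np.sin(t1 / 2)])
    c = np.array([np.cos(t2 / 2), np.exp(1j * p2) * np.sin(t2 / 2)])
    return np.kron(a, c)

def min_over_products(W, restarts=30, rng=np.random.default_rng(0)):
    """Approximate global minimum of <ab|W|ab> over pure product states."""
    val = lambda x: np.real(np.vdot(product_state(x), W @ product_state(x)))
    return min(minimize(val, rng.uniform(0, np.pi, 4)).fun for _ in range(restarts))

rng = np.random.default_rng(1)
for _ in range(500):
    c = rng.normal(size=len(obs))
    c /= np.linalg.norm(c)
    if c @ b < -1e-6:                                  # candidate detects rho ...
        W = sum(ci * Oi for ci, Oi in zip(c, obs))
        if min_over_products(W) >= -1e-8:              # ... and is nonnegative on product states
            print("witness found: c =", np.round(c, 3), " Tr(W rho) =", round(float(c @ b), 3))
            break
```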

    Submodular Maximization Meets Streaming: Matchings, Matroids, and More

    We study the problem of finding a maximum matching in a graph given by an input stream listing its edges in some arbitrary order, where the quantity to be maximized is given by a monotone submodular function on subsets of edges. This problem, which we call maximum submodular-function matching (MSM), is a natural generalization of maximum weight matching (MWM), which is in turn a generalization of maximum cardinality matching (MCM). We give two incomparable algorithms for this problem with space usage in the semi-streaming range (they store only O(n) edges, using O(n log n) working memory) that achieve approximation ratios of 7.75 in a single pass and (3+ε) in O(ε⁻³) passes, respectively. The operations of these algorithms mimic those of Zelke's and McGregor's respective algorithms for MWM; the novelty lies in the analysis for the MSM setting. In fact, we identify a general framework for MWM algorithms that allows this kind of adaptation to the broader setting of MSM. In the sequel, we give generalizations of these results where the maximization is over "independent sets" in a very general sense. This generalization captures hypermatchings in hypergraphs as well as independence in the intersection of multiple matroids. Comment: 18 pages.
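
    To make the charging idea concrete, here is a hedged single-pass sketch in the spirit of the McGregor-style swap rule the abstract alludes to, with marginal gains of a monotone submodular function f playing the role of edge weights. The parameter gamma, the toy coverage function, and the exact swap condition are assumptions for illustration, not the paper's algorithms or their approximation guarantees.

```python
# Illustrative single-pass sketch: keep a matching and a "charge" per kept edge;
# an arriving edge replaces its conflicting edges if its marginal gain (with the
# conflicts removed) beats (1 + gamma) times their combined charge.
def stream_msm(edges, f, gamma=1.0):
    """One pass over `edges`; stores O(n) edges: the current matching plus charges."""
    matching = {}   # vertex -> edge of the current matching M
    charge = {}     # edge -> marginal value credited to it when it entered M

    def edges_of(M):
        return set(M.values())

    for e in edges:
        u, v = e
        conflicts = {matching[x] for x in (u, v) if x in matching}
        base = edges_of(matching) - conflicts
        gain = f(base | {e}) - f(base)          # marginal value of e once conflicts leave
        if gain >= (1 + gamma) * sum(charge[c] for c in conflicts):
            for c in conflicts:                 # evict the conflicting edges
                a, b = c
                matching.pop(a, None); matching.pop(b, None)
                charge.pop(c, None)
            matching[u] = e; matching[v] = e
            charge[e] = gain
    return edges_of(matching)

# Tiny usage example with a coverage function: f(S) = number of elements covered.
cover = {('a', 'b'): {1, 2}, ('b', 'c'): {2, 3}, ('a', 'd'): {4}, ('c', 'd'): {1, 5}}
f = lambda S: len(set().union(*[cover[e] for e in S])) if S else 0
print(stream_msm(list(cover), f))
```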

    Mixed state discrimination using optimal control

    We present theory and experiment for the task of discriminating two nonorthogonal states, given multiple copies. We implement several local measurement schemes, on both pure states and states mixed by depolarizing noise. We find that schemes which are optimal (or have optimal scaling) without noise perform worse with noise than simply repeating the optimal single-copy measurement. Applying optimal control theory, we derive the globally optimal local measurement strategy, which outperforms all other local schemes, and experimentally implement it for various levels of noise. Comment: Corrected ref 1 date; 4 pages & 4 figures + 2 pages & 3 figures supplementary material.
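
    A small numerical sketch of the kind of comparison being made: repeating the optimal single-copy (Helstrom) measurement with a majority vote versus the collective Helstrom bound on N copies, for two nonorthogonal qubit states under depolarizing noise. The state separation and noise level are arbitrary assumptions, and the optimal-control-derived local strategy of the paper is not reproduced here.

```python
# Compare "repeat the optimal single-copy measurement + majority vote" against the
# collective Helstrom bound on N copies of two noisy nonorthogonal qubit states.
import numpy as np
from math import comb
from functools import reduce

def ket(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def depolarize(rho, p):
    return (1 - p) * rho + p * np.eye(2) / 2

def helstrom(rho0, rho1):
    """Optimal success probability for equal priors: 1/2 + ||rho0 - rho1||_1 / 4."""
    eig = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 + 0.25 * np.sum(np.abs(eig))

theta = np.pi / 8          # assumed angle between the two pure states
p = 0.2                    # assumed depolarizing-noise level
rho0 = depolarize(np.outer(ket(0), ket(0)), p)
rho1 = depolarize(np.outer(ket(theta), ket(theta)), p)

q = helstrom(rho0, rho1)   # optimal single-copy success probability
for N in (1, 3, 5, 7):
    # Strategy A: repeat the single-copy measurement N times, take a majority vote.
    majority = sum(comb(N, k) * q**k * (1 - q)**(N - k) for k in range(N // 2 + 1, N + 1))
    # Strategy B: collective Helstrom measurement on all N copies at once.
    R0 = reduce(np.kron, [rho0] * N)
    R1 = reduce(np.kron, [rho1] * N)
    collective = helstrom(R0, R1)
    print(f"N={N}:  majority vote {majority:.4f}   collective bound {collective:.4f}")
```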

    Complexity of Strong Implementability

    We consider the question of implementability of a social choice function in a classical setting where the preferences of finitely many selfish individuals with private information have to be aggregated towards a social choice. This is one of the central questions in mechanism design. If the concept of weak implementation is considered, the Revelation Principle states that one can restrict attention to truthful implementations and direct revelation mechanisms, which implies that implementability of a social choice function is easy to check. For the concept of strong implementation, however, the Revelation Principle becomes invalid, and the complexity of deciding whether a given social choice function is strongly implementable has been open so far. In this paper, we show by using methods from polyhedral theory that strong implementability of a social choice function can be decided in polynomial space and that each of the payments needed for strong implementation can always be chosen to be of polynomial encoding length. Moreover, we show that strong implementability of a social choice function involving only a single selfish individual can be decided in polynomial time via linear programming.
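
    The single-individual linear-programming flavour can be illustrated with a hedged toy: for finitely many types, payments that make truthful reporting optimal (the simpler, weak notion) can be found, if they exist, by an LP feasibility check. The types, valuations, and choice function below are invented for the example; the paper's polyhedral test for strong implementability is more involved.

```python
# LP feasibility sketch: find payments p[t] such that reporting the true type t is
# at least as good as reporting any other type t' under the choice function f.
import numpy as np
from scipy.optimize import linprog

types = ["low", "high"]                 # hypothetical private types
outcomes = ["A", "B"]
v = {("low", "A"): 1, ("low", "B"): 0,  # hypothetical valuations v(type, outcome)
     ("high", "A"): 0, ("high", "B"): 3}
f = {"low": "A", "high": "B"}           # the social choice function to implement

# Incentive compatibility for every pair (t, t'):
#   v(t, f(t)) - p[t] >= v(t, f(t')) - p[t']   i.e.   p[t] - p[t'] <= v(t, f(t)) - v(t, f(t'))
idx = {t: i for i, t in enumerate(types)}
A_ub, b_ub = [], []
for t in types:
    for t2 in types:
        if t == t2:
            continue
        row = np.zeros(len(types))
        row[idx[t]], row[idx[t2]] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(v[(t, f[t])] - v[(t, f[t2])])

res = linprog(c=np.zeros(len(types)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * len(types))
print("implementable with payments:" if res.success else "no implementing payments",
      dict(zip(types, np.round(res.x, 3))) if res.success else "")
```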

    Sharing Supermodular Costs

    We study cooperative games with supermodular costs. We show that supermodular costs arise in a variety of situations; in particular, we show that the problem of minimizing a linear function over a supermodular polyhedron—a problem that often arises in combinatorial optimization—has supermodular optimal costs. In addition, we examine the computational complexity of the least core and least core value of supermodular cost cooperative games. We show that the problem of computing the least core value of these games is strongly NP-hard and, in fact, is inapproximable within a factor strictly less than 17/16 unless P = NP. For a particular class of supermodular cost cooperative games that arises from a scheduling problem, we show that the Shapley value—which, in this case, is computable in polynomial time—is in the least core, while computing the least core value is NP-hard. National Science Foundation (U.S.) (DMI-0426686).
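
    For very small games, the objects discussed above can simply be enumerated; the sketch below does so for a supermodular cost game in the scheduling spirit mentioned in the abstract (coalition cost = minimum total completion time on one machine). The instance is invented, and the enumeration says nothing about the hardness results, which concern games with many players.

```python
# Shapley value (average marginal cost over arrival orders) and least-core value
# (smallest uniform relaxation z of the core constraints) for a tiny cost game.
from itertools import permutations, combinations
import numpy as np
from scipy.optimize import linprog

p = {1: 1, 2: 2, 3: 4}          # hypothetical processing times
players = sorted(p)

def cost(S):
    """Min sum of completion times for jobs in S: schedule shortest-first."""
    t, total = 0, 0
    for j in sorted(S, key=lambda j: p[j]):
        t += p[j]
        total += t
    return total

# Shapley value by averaging marginal costs over all arrival orders.
shapley = {j: 0.0 for j in players}
orders = list(permutations(players))
for order in orders:
    S = []
    for j in order:
        shapley[j] += (cost(S + [j]) - cost(S)) / len(orders)
        S.append(j)

# Least core value: min z s.t. x(S) <= cost(S) + z for proper nonempty S, x(N) = cost(N).
n = len(players)
subsets = [S for r in range(1, n) for S in combinations(players, r)]
A_ub = [[1.0 if j in S else 0.0 for j in players] + [-1.0] for S in subsets]
b_ub = [cost(S) for S in subsets]
A_eq = [[1.0] * n + [0.0]]
b_eq = [cost(players)]
c = [0.0] * n + [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + 1))
print("Shapley value:", {j: round(v, 3) for j, v in shapley.items()})
print("least core value z* =", round(res.x[-1], 3))
```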

    An oil pipeline design problem

    Copyright @ 2003 INFORMS. We consider a given set of offshore platforms and onshore wells producing known (or estimated) amounts of oil to be connected to a port. Connections may take place directly between platforms, well sites, and the port, or may go through connection points at given locations. The configuration of the network and the sizes of the pipes used must be chosen to minimize construction costs. This problem is expressed as a mixed-integer program, and solved both heuristically, by Tabu Search and Variable Neighborhood Search methods, and exactly, by a branch-and-bound method. Two new types of valid inequalities are introduced. Tests are made with data from the South Gabon oil field and with randomly generated problems. The work of the first author was supported by NSERC grant #OGP205041. The work of the second author was supported by FCAR (Fonds pour la Formation des Chercheurs et l’Aide à la Recherche) grant #95-ER-1048 and NSERC grant #GP0105574.
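
    A hedged sketch of the problem's cost structure and of the kind of neighbourhood a Tabu Search or Variable Neighborhood Search would explore: a candidate design is a tree of connections rooted at the port, each edge carries the production of its subtree, and each edge is assigned the cheapest pipe size with sufficient capacity. Site locations, productions, and pipe data below are invented; this is not the paper's mixed-integer model or its heuristics.

```python
# Evaluate a candidate pipeline design (a tree rooted at the port) and improve it
# with single "reconnect one site" moves, a simple local-search neighbourhood.
import math

port = "port"
production = {"P1": 5.0, "P2": 3.0, "W1": 2.0}                 # hypothetical platforms / wells
coords = {"port": (0, 0), "P1": (4, 0), "P2": (4, 3), "W1": (1, 3)}
pipe_sizes = [(4.0, 1.0), (8.0, 1.6), (12.0, 2.1)]             # (capacity, cost per unit length)

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def design_cost(parent):
    """Cost of the design `parent` (site -> downstream neighbour), or inf if invalid."""
    flow = {n: 0.0 for n in parent}
    for node, prod in production.items():
        cur, steps = node, 0
        while cur != port:                      # push this site's production toward the port
            flow[cur] += prod
            cur, steps = parent[cur], steps + 1
            if steps > len(parent):             # cycle: not a tree
                return math.inf
    total = 0.0
    for node, par in parent.items():            # cheapest pipe size that can carry the flow
        unit = min((c for cap, c in pipe_sizes if cap >= flow[node]), default=math.inf)
        total += unit * dist(node, par)
    return total

parent = {n: port for n in production}          # start from the star design
best = design_cost(parent)
improved = True
while improved:
    improved = False
    for node in production:
        for new_par in [port] + [m for m in production if m != node]:
            trial = dict(parent); trial[node] = new_par
            c = design_cost(trial)
            if c < best - 1e-9:
                parent, best, improved = trial, c, True
print("design:", parent, "  cost:", round(best, 2))
```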

    ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks

    Hash codes are efficient data representations for coping with the ever-growing amounts of data. In this paper, we introduce a random forest semantic hashing scheme that embeds tiny convolutional neural networks (CNN) into shallow random forests, with near-optimal information-theoretic code aggregation among trees. We start with a simple hashing scheme, where random trees in a forest act as hashing functions by setting `1' for the visited tree leaf and `0' for the rest. We show that traditional random forests fail to generate hashes that preserve the underlying similarity between the trees, rendering the random forests approach to hashing challenging. To address this, we propose to first randomly group arriving classes at each tree split node into two groups, obtaining a significantly simplified two-class classification problem, which can be handled using a light-weight CNN weak learner. Such a random class grouping scheme enables code uniqueness by enforcing each class to share its code with different classes in different trees. A non-conventional low-rank loss is further adopted for the CNN weak learners to encourage code consistency, by minimizing intra-class variations and maximizing inter-class distance for the two random class groups. Finally, we introduce an information-theoretic approach for aggregating codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. The proposed approach significantly outperforms state-of-the-art hashing methods for image retrieval tasks on large-scale public datasets, while performing at the level of other state-of-the-art image classification techniques and utilizing a more compact, efficient, and scalable representation. This work proposes a principled and robust procedure to train and deploy in parallel an ensemble of light-weight CNNs, instead of simply going deeper. Comment: Accepted to ECCV 201
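
    The following toy sketch conveys the forest-hashing mechanics only: each tree randomly splits the classes into two groups, trains a weak binary learner for that split, and contributes a one-hot leaf code, with the codes concatenated across trees. A least-squares linear classifier stands in for the tiny CNN weak learners, and the low-rank loss and information-theoretic code aggregation are omitted.

```python
# Toy forest hash: random two-group class splits per tree, a weak binary learner
# per split, and one-hot leaf codes concatenated across trees.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_trees = 4, 16, 6

# Hypothetical training data: a few noisy samples around one prototype per class.
prototypes = rng.normal(size=(n_classes, dim))
X = np.vstack([prototypes[c] + 0.1 * rng.normal(size=(20, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 20)

trees = []
for _ in range(n_trees):
    group = rng.permutation(n_classes) < n_classes // 2      # random two-group class split
    labels = group[y].astype(float)                           # 0/1 target for this tree
    # Weak learner: least-squares linear classifier (a stand-in for a tiny CNN).
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.lstsq(Xb, labels, rcond=None)[0]
    trees.append(w)

def forest_hash(x):
    """Concatenate one-hot leaf codes: two bits per tree, either '10' or '01'."""
    xb = np.append(x, 1.0)
    bits = []
    for w in trees:
        leaf = int(xb @ w > 0.5)
        bits.extend([1 - leaf, leaf])
    return np.array(bits, dtype=np.uint8)

codes = np.array([forest_hash(prototypes[c]) for c in range(n_classes)])
print(codes)   # each class shares bits with different classes in different trees
```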

    Exact Ground States of Large Two-Dimensional Planar Ising Spin Glasses

    Studying spin-glass physics by analyzing ground-state properties has a long history. Although there exist polynomial-time algorithms for the two-dimensional planar case, where the problem of finding ground states is transformed into a minimum-weight perfect matching problem, the reachable system sizes have been limited both by the required CPU time and by memory requirements. In this work, we present an algorithm for the calculation of exact ground states of two-dimensional Ising spin glasses with free boundary conditions in at least one direction. The algorithmic foundations of the method date back to the work of Kasteleyn from the 1960s on computing the complete partition function of the Ising model. Using Kasteleyn cities, we calculate exact ground states for huge two-dimensional planar Ising spin-glass lattices (up to 3000x3000 spins) within reasonable time. To our knowledge, these are the largest sizes currently available. Kasteleyn cities were recently also used by Thomas and Middleton in the context of extended ground states on the torus; moreover, they show that the method can also be used for computing ground states of planar graphs. Furthermore, we point out that the correctness of heuristically computed ground states can easily be verified. Finally, we evaluate the solution quality of heuristic variants of the Bieche et al. approach. Comment: 11 pages, 5 figures; shortened introduction, extended results; to appear in Physical Review E 7
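
    As a minimal illustration of what checking a heuristic ground state can mean on instances small enough for brute force (nothing like the Kasteleyn/matching machinery that reaches 3000x3000 spins), the sketch below enumerates all configurations of a tiny ±J lattice with free boundaries and compares a greedy single-spin-flip heuristic against the exact ground-state energy.

```python
# Tiny +-J Ising spin glass with free boundaries: exact ground-state energy by
# enumeration, compared against a greedy single-spin-flip local search.
import itertools
import numpy as np

rng = np.random.default_rng(3)
L = 3
Jh = rng.choice([-1, 1], size=(L, L - 1))     # horizontal bonds (i,j)-(i,j+1)
Jv = rng.choice([-1, 1], size=(L - 1, L))     # vertical   bonds (i,j)-(i+1,j)

def energy(s):
    """H = -sum_<ij> J_ij s_i s_j on an open-boundary LxL square lattice."""
    return -(np.sum(Jh * s[:, :-1] * s[:, 1:]) + np.sum(Jv * s[:-1, :] * s[1:, :]))

# Exact ground-state energy by exhaustive enumeration (feasible only for tiny L).
exact = min(energy(np.array(c).reshape(L, L))
            for c in itertools.product([-1, 1], repeat=L * L))

# Heuristic: greedy single-spin flips from a random start until no flip helps.
s = rng.choice([-1, 1], size=(L, L))
current, improved = energy(s), True
while improved:
    improved = False
    for i in range(L):
        for j in range(L):
            s[i, j] *= -1
            e = energy(s)
            if e < current:
                current, improved = e, True
            else:
                s[i, j] *= -1                 # undo the flip
print("exact ground-state energy:", exact, "  greedy local minimum:", current)
```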

    Tight Kernel Bounds for Problems on Graphs with Small Degeneracy

    In this paper we consider kernelization for problems on d-degenerate graphs, i.e., graphs such that every subgraph contains a vertex of degree at most d. This graph class generalizes many classes of graphs for which effective kernelization is known to exist, e.g., planar graphs, H-minor-free graphs, and H-topological-minor-free graphs. We show that for several natural problems on d-degenerate graphs the best known kernelization upper bounds are essentially tight. Comment: Full version of ESA 201
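
    For readers unfamiliar with degeneracy, the definition above corresponds to the standard peeling procedure sketched below: repeatedly remove a minimum-degree vertex; the largest degree encountered at removal time is the degeneracy d. This is generic background, not part of the kernelization results.

```python
# Compute the degeneracy of an undirected graph by min-degree peeling.
def degeneracy(adj):
    """adj: dict vertex -> set of neighbours (undirected). Returns the degeneracy."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # current minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj[v]:                              # remove v from the graph
            adj[u].discard(v)
        del adj[v]
    return d

# Example: a 5-cycle with one chord is 2-degenerate (planar graphs are 5-degenerate).
cycle_with_chord = {0: {1, 4, 2}, 1: {0, 2}, 2: {1, 3, 0}, 3: {2, 4}, 4: {3, 0}}
print(degeneracy(cycle_with_chord))   # -> 2
```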

    The Limitations of Optimization from Samples

    In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
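
    A toy rendering of the OPS setup may help fix ideas: observe values of a coverage function on randomly drawn small sets, fit a naive surrogate from those samples, optimize it under a cardinality constraint, and compare with the true optimum. The instance and the surrogate are assumptions for illustration; the impossibility result above concerns what no such sample-based procedure can achieve in the worst case.

```python
# Optimization-from-samples toy: learn a crude surrogate for a coverage function
# from (set, value) samples, then optimize it under a cardinality constraint.
import itertools
import numpy as np

rng = np.random.default_rng(7)
cover = {i: set(rng.choice(30, size=rng.integers(2, 6), replace=False)) for i in range(10)}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
k = 3                                                    # cardinality constraint

# Samples: (set, value) pairs with sets drawn uniformly among subsets of size <= k.
samples = []
for _ in range(200):
    S = tuple(rng.choice(10, size=rng.integers(1, k + 1), replace=False))
    samples.append((S, f(S)))

# Naive surrogate: credit each element its average per-element sample value,
# then take the k elements with the largest credit.
credit = {i: np.mean([val / len(S) for S, val in samples if i in S] or [0.0]) for i in range(10)}
ops_choice = sorted(range(10), key=lambda i: -credit[i])[:k]

true_opt = max(f(S) for S in itertools.combinations(range(10), k))
print("OPS surrogate pick:", sorted(ops_choice), " value", f(ops_choice))
print("true optimum value:", true_opt)
```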
    • 
